Working with mesh data

Right now, we have an application ready and set up, but nothing on screen. Or, more exactly, we only have a constant color for the whole window. It is time to change that !

In this tutorial, we will present how to bring a 3D mesh to the screen, either by creating it through the API or by loading it from an external file.
All code given here should be done before entering the run loop.

Let's create a triangle !

Working with the basic API

Let's start by creating that mesh from the code. This will enable us to have maximum control over what it will look like.

Let's start by including everything necessary :

#include <NilkinsGraphics/Meshes/Mesh.h>
#include <NilkinsGraphics/Meshes/MeshManager.h>

Now, we can first create the mesh :

// Create a mesh by requesting the manager
nkGraphics::Mesh* mesh = nkGraphics::MeshManager::getInstance()->createOrRetrieve("Mesh") ;

This is a pattern you will often find in the component, and in the engine in general :

  1. Create a resource through the dedicated manager
  2. Use it
  3. Erase it, again, through the dedicated manager

In this case, the MeshManager is responsible for all memory allocations concerning meshes.
Creating a resource goes through the createOrRetrieve function, which creates the resource if it does not exist yet, or retrieves the existing one otherwise.
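For instance, once done with the mesh, cleanup would go back through the same manager. As a sketch (the exact name of the erasing function is an assumption here, check the MeshManager API for the real call) :

// Hand the mesh back to its manager once done with it
// 'erase' is an assumed name, used for illustration purposes only
nkGraphics::MeshManager::getInstance()->erase("Mesh") ;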

This pattern has some benefits : ownership is centralized within the manager, a resource can be retrieved by name from anywhere in the code, and cleanup happens in one well-defined place.

Now that the mesh is created, we can set it up. There are different ways, allowing for either maximum control or carefree setup. Let's first see how to use the standard mesh API, by looking at how data can be provided with it.

// We can pack data on CPU, for this one we will use the vector from nkMaths
// But we could use whatever format and packing we want
// Constraint is that it needs to map to a format available in attributes, or be described using strides and offsets
nkMemory::BufferCast<nkMaths::Vector> pointArray (3) ;
pointArray[0] = nkMaths::Vector(-1.f, -1.f, 10.f) ;
pointArray[1] = nkMaths::Vector(0.f, 1.f, 10.f) ;
pointArray[2] = nkMaths::Vector(1.f, -1.f, 10.f) ;

// Setup the layout describing this CPU data
// We name our attributes using the Nilkins convention, but these could be named differently and mapped through the layout's annotations
// Attribute in this case will auto compute its offset (0) and stride (16) from its format and order in the layout attribute array
nkGraphics::MeshInputLayoutAttribute positionAttribute ("POSITION") ;
positionAttribute._format = nkGraphics::R32G32B32A32_FLOAT ;

// Prepare layout which is only formed by the position attribute
nkGraphics::MeshInputLayout inputLayout ;
inputLayout.addAttribute(positionAttribute) ;

// And use them to populate mesh data
mesh->addVertexBufferForward(pointArray.relinquishBufferOwnership()) ;
mesh->setInputLayout(inputLayout) ;
mesh->setVertexCount(3) ;

The first step is to shape our data as we see fit. In this example, we will use a simple buffer of vectors (4 contiguous floats), each entry describing a vertex's position.
Our data describes a simple triangle. This will be the CPU data we provide to the mesh's vertex buffer.

Having the mesh know about the data is good, but we also need to tell it how to use that data. For this purpose, we need a MeshInputLayout, formed by one or more MeshInputLayoutAttributes.
The input layout allows the engine to link what the mesh has to offer to a Program's code (HLSL or GLSL), effectively enabling correct usage and interpretation of the given data.

In this case, our layout is formed by only one attribute, representing the position of the vertices. One important notion here is that an attribute has a name, called its semantic name, which acts as a unique identifier for it. In HLSL, this semantic name is always specified next to the attribute declaration. In GLSL, for nkGraphics, the semantic name is the declaration name of the attribute. Having all of this aligned ensures the data is correctly interpreted.
Here, we choose 'POSITION' as the semantic name, which is the default name nkGraphics looks for in its built-in shader programs for positions.
nkGraphics will sometimes require certain attributes to represent something specific (positions, normals...).
In such cases, the input layout can map custom names through its annotations. Please see the documentation for more information on that matter.
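To illustrate the alignment, here is a sketch of matching declarations in custom programs. These snippets are illustrative only, and are not the engine's built-in shaders :

// HLSL : the semantic name appears after the attribute declaration
float4 vertexMain(float4 position : POSITION) : SV_POSITION
{
    return position ;
}

// GLSL : the declaration name itself acts as the semantic name
in vec4 POSITION ;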

To conclude on the layout, the attribute does not require anything beyond the defaults. When not overloaded in the attribute description, the attribute offset and buffer stride are computed automatically from the input layout. But beware that this depends on the order in which attributes are added to the layout : attributes fed first are considered first in the buffer they are tied to.

The last step is to let the mesh know about the data, the layout, and how many points there are inside (this drives the drawing commands).
One interesting point is that a Mesh offers different ways to provide data to it. Data can be copied over, forwarded, or simply referenced. Depending on the lifetime and usage of your data, you can freely choose either.
In this case, we know we won't use the buffer's data afterwards, so we choose to forward it to the mesh, avoiding a copy.
Note that copying would be possible here, although wasteful. Referencing, however, would be dangerous : the mesh loads asynchronously, and our buffer would free its data when going out of scope.

Let's move on to the next step :

// Prepare an index buffer to describe mesh surface
nkMemory::BufferCast<unsigned int> indexArray (3) ;

// Index data, forward is counter clockwise, we need to take it into account for back face culling
indexArray[0] = 0 ;
indexArray[1] = 2 ;
indexArray[2] = 1 ;

// Give information to the mesh
mesh->setIndexBufferForward(indexArray.relinquishBufferOwnership()) ;
mesh->setIndexCount(3) ;

In this example, we will provide index data. This is optional, especially in this case, as non-indexed vertex data is interpreted as a plain list of vertices, according to the mesh's topology (a triangle list here). But for demonstration purposes, we will go with it.

Note how we index by swapping the vertices at indices 1 and 2, to account for the counter-clockwise front. This is the default winding order of the rasterization state when drawing.
What is left is to forward the buffer we won't reuse, and set the index count for drawing.

Finally, the mesh can be loaded. Here is another pattern you will often encounter in the engine. Resources always follow the same lifecycle :

  1. Create the resource. It can be used right away, with a default behaviour
  2. Change all parameters required
  3. Request a loading of the resource. If it succeeds, the behaviour will now use the parameters provided
  4. Use, change parameters and reload if required
  5. Erase, unload

Such a pattern allows a resource to be usable right after creation, parameters to be changed and applied in one well-defined reload step, and loading errors to be caught at a single point.

With that in mind, a last call has to be made :

mesh->load() ;

Now our mesh is ready to be used ! If the loading step fails, this method will return false and it will log what went wrong.
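Since loading can fail, a minimal check could look like this (how to react is up to the application) :

// load() returns false on failure and logs the reason
if (!mesh->load())
{
    // React as fits the application, for example by aborting the setup
    return -1 ;
}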

Alternative : working with the utils API

The basic API allows for a lot of control, which provides all the flexibility you could need when designing your data formats. However, sometimes you just want a fast and easy way to setup a mesh. This is where MeshUtils can help you.

Let's include what we need from it :

// Needed to manipulate the higher level API
#include <NilkinsGraphics/Meshes/Utils/MeshUtils.h>
#include <NilkinsGraphics/Meshes/Utils/VertexComposition.h>
#include <NilkinsGraphics/Meshes/Utils/VertexData.h>

We can then use high-level structures to fill in what we want the mesh to offer. Keep in mind that this data will be translated into lower-level buffers and an input layout, meaning there is some overhead in using it. Let's see how to work with it :

// Create a buffer of 3 vertices to populate using the structure given
nkMemory::BufferCast<nkGraphics::VertexData> pointArray (3) ;
pointArray[0]._position = nkMaths::Vector(-1.f, -1.f, 10.f) ;
pointArray[1]._position = nkMaths::Vector(0.f, 1.f, 10.f) ;
pointArray[2]._position = nkMaths::Vector(1.f, -1.f, 10.f) ;

// Prepare the composition we want, with position only
nkGraphics::VertexComposition composition ;
composition._hasPositions = true ;
composition._hasColors = false ;
composition._hasTexCoords = false ;
composition._hasNormals = false ;
composition._hasTangents = false ;
composition._hasBinormals = false ;

// Request translation of both structures into data we can feed to the mesh
nkGraphics::PackedMeshData meshData = nkGraphics::MeshUtils::packIntoMeshData(pointArray, composition) ;

// And fit into the mesh (stride will be computed automatically from layout, data being tightly packed)
mesh->addVertexBufferForward(std::move(meshData._vertexBuffer)) ;
mesh->setInputLayout(meshData._inputLayout) ;
mesh->setVertexCount(3) ;

We create a buffer of VertexData, each entry representing one vertex.
This structure then has members you can feed with the data you need. To draw a parallel with the previous example, we will build the same triangle using this approach. This means we feed the position member of the structure, for each vertex.

Then, by filling the VertexComposition structure, we describe which attributes the data translation should take into account. Here, only positions are enabled.

With both objects ready, we can call MeshUtils::packIntoMeshData, which translates our list of points and the requested composition into a binary buffer and an input layout.
This packed data can then be used to feed the mesh, without having to worry about alignment or format issues.
The data can be forwarded the same way as before : we won't reuse the generated data, so this is perfectly safe.

Remaining operations (index buffer, loading) can be done exactly the same way as in our first example.
This operation only translates vertices into a compact version usable by the mesh. Indexing, topology and such can still be freely altered.

Using the render queue

Now that the mesh is ready, we need to tell the component that we want it painted during the rendering step. For that, we need to tinker with render queues.
A render queue is exactly what its name implies : it queues objects that need to be rendered. Render queues are consumed by the passes responsible for image composition. We will cover this in a later tutorial, so for now let's focus on the render queue itself. First, include :

#include <NilkinsGraphics/RenderQueues/RenderQueue.h>
#include <NilkinsGraphics/RenderQueues/RenderQueueManager.h>

With those includes, we can start messing with the render queues :

nkGraphics::RenderQueue* rq = nkGraphics::RenderQueueManager::getInstance()->get(nkGraphics::RenderQueueManager::DEFAULT_RENDER_QUEUE) ;

Here we get queue number 0. This is the queue used by the default image composition when painting the scene. As a result, by altering it, we change what is rendered right away.

nkGraphics::Entity* ent = rq->addEntity() ;
nkGraphics::SubEntity* subEnt = ent->addChild() ;
subEnt->setMesh(mesh) ;

What is happening here is that we are adding an "entity" to the render queue. An entity represents an object enqueued for rendering. On it, we will be able to set all the information we need.
An entity is made up of sub entities. Each sub entity represents a mesh that can be rendered. As a result, one entity can be composed of many meshes.
To set the mesh, we add an entity to the queue and declare a sub entity on it, before finally setting the mesh on that sub entity.
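To illustrate the one-to-many relation, a second mesh could be attached to the same entity. As a sketch, assuming another mesh named otherMesh has been prepared and loaded :

// A second sub entity on the same entity, rendering another mesh
// otherMesh is a hypothetical, already loaded mesh used for illustration
nkGraphics::SubEntity* otherSubEnt = ent->addChild() ;
otherSubEnt->setMesh(otherMesh) ;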

With all of that, we are ready to give our program a new run. Let's see what it looks like :

Beautiful triangle
Our triangle is visible on screen !

To recap, for a mesh, we need to :

  1. Create the mesh, prepare its data (whichever way you prefer) and load it
  2. Request a render queue we want to see it in. A mesh can be set on as many queues as needed
  3. Prepare the entity on the queue, and its sub entity with the mesh

By respecting all these steps, a mesh can easily be part of the rendering pipeline.
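Condensed into code, the whole flow from this section looks like this :

// 1 - Create the mesh, feed its data and load it
nkGraphics::Mesh* mesh = nkGraphics::MeshManager::getInstance()->createOrRetrieve("Mesh") ;
// ... provide the vertex / index data using any of the approaches seen above ...
mesh->load() ;

// 2 - Request the render queue the mesh should appear in
nkGraphics::RenderQueue* rq = nkGraphics::RenderQueueManager::getInstance()->get(nkGraphics::RenderQueueManager::DEFAULT_RENDER_QUEUE) ;

// 3 - Prepare the entity on the queue, and its sub entity with the mesh
nkGraphics::Entity* ent = rq->addEntity() ;
nkGraphics::SubEntity* subEnt = ent->addChild() ;
subEnt->setMesh(mesh) ;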

Let's get a more complicated mesh

Using a file as source

Creating our own mesh from scratch can be useful in some cases, but in others we might want to load an already prepared mesh from a file. Let's load a sphere from an obj file, which you will find within the release's Data folder (Meshes/sphere.obj).
Also, keep track of the folder given to the ResourceManager. As a reminder, the working path set within it will be considered the root folder to use.

So let's pick things up right after the mesh creation from the manager. The only step we will need is :

mesh->setResourcePath("sphere.obj") ;

Loading the mesh, provided the file can be found from the root path, will load the file data and feed the mesh with it automatically.
For that, it will query the CompositeEncoder with the sources found, using default decoding options.
If parsing succeeds, the mesh will use the first DecodedMeshData found to automatically create its vertex buffers. And with all of that, the sphere should be ready.
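In other words, the simplest file-based setup boils down to :

// Path is relative to the working path set in the ResourceManager
mesh->setResourcePath("sphere.obj") ;
// Loading will decode the file and feed the mesh automatically
mesh->load() ;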

However, in this case, we want more control. The linked mesh has been exported with texture coordinates inverted on the Y axis, and with -Z as the front, which alters the winding order within the triangles. Let's see how we can address these issues. Let's include :

#include <NilkinsGraphics/Encoders/Obj/ObjEncoder.h>

The ObjEncoder is the class that will be able to interpret the Obj format and create mesh information from it.
Let's see how we can use it to load data, while altering the way it behaves :

// Load data from file
nkMemory::String absPath = nkResources::ResourceManager::getInstance()->getAbsoluteFromWorkingDir("sphere.obj") ;
nkMemory::Buffer objData = nkResources::ResourceManager::getInstance()->loadFileIntoMemory(absPath) ;

// Request decoding, here we know the format so we address the ObjEncoder directly
// We change some settings to alter the way the mesh is imported
nkGraphics::ObjDecodeOptions objOptions ;
objOptions._invertUvY = true ;
objOptions._invertWindingOrder = true ;

nkGraphics::DecodedData objDecoded = nkGraphics::ObjEncoder::decode(objData, objOptions) ;

// We can then either fill the mesh manually through the decoded mesh data, or request the Utils to do it for us here
// We know we won't use the decoded data later, so we request the use of memory forwarding during filling, rather than copying
nkGraphics::MeshFillOptions fillOptions ;
fillOptions._autoLoad = true ;
fillOptions._dataFillType = nkGraphics::DATA_FILL_TYPE::FORWARD ;

nkGraphics::MeshUtils::fillMeshFromDecodedData(objDecoded._meshData[0], mesh, fillOptions) ;

The first thing we need is the data for the encoder to interpret. For that, we use the ResourceManager to resolve the absolute path and load the file data into a buffer.
Then, we prepare some options for the encoder to alter the way it imports the data. To account for the texture coordinates, we ask the encoder to invert the Y axis. The -Z front axis, in turn, is a problem for our default winding order, so we request an inversion of the winding order to fix the direction the surfaces are facing.
Once all of that is set up, we can request the parsing of the data we retrieved, with the options requested.

Decoding can provide more than one mesh. Obj files can have many groups, and future formats may expose many primitives or meshes too. As such, it is important to know which mesh you want to import, or to prepare logic to cope with that. In any case, we know this file contains one mesh, so we will use the first entry.
We could interpret all the information from the DecodedMeshData structure ourselves, but there is also a utility function, MeshUtils::fillMeshFromDecodedData, which we use here. As for decoding, there are options we can tune for the filling. We choose to let it forward data to the mesh (which means our structure will be altered) and load the mesh for us once ready.
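When the number of decoded meshes is not known in advance, it is worth guarding the access. As a sketch (the exact way to query the entry count on _meshData is an assumption here) :

// Only use the first entry if decoding actually produced one
// The size() call is an assumed accessor, check the DecodedData API for the exact one
if (objDecoded._meshData.size() > 0)
    nkGraphics::MeshUtils::fillMeshFromDecodedData(objDecoded._meshData[0], mesh, fillOptions) ;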

With this, we can alter how the mesh is decoded. This can prove useful to work with different exporters without having to reprocess the data.

So now, we have altered the decoding logic to cope with our local space, decoded the obj data, filled the mesh from the decoded information and automatically loaded it. Let's see what we currently have on screen :

Mesh disappeared
A sphere, yes yes

The current image is not very encouraging, but fear not ! This is normal. First, ensure that the MeshLoader is not complaining that the file cannot be loaded. If it is, check that the path you give is correct and relative to the working path set within the ResourceManager.
Then, if nothing is logged, the mesh has been successfully loaded and is being displayed. But why aren't we seeing anything ?

Positioning meshes in the world

You may have guessed it : our mesh is simply centered right where our camera is, at (0, 0, 0). As a result, we are seeing the sphere from the inside. Its back faces being culled away by default, the component doesn't render anything.

The way to correct that is to move it, through the use of a node graph. Each Entity can be linked to a Node that will give its position, orientation, and scale within a scene graph. By default, an Entity is not tied to any node, which means its world coordinates correspond to its model coordinates.
Let's change that, and include :

#include <NilkinsGraphics/Graph/Node.h>
#include <NilkinsGraphics/Graph/NodeManager.h>

With all of this, we will be able to manipulate nodes, like this :

nkGraphics::Node* node = nkGraphics::NodeManager::getInstance()->create() ;
node->setPositionAbsolute(nkMaths::Vector(0.f, 0.f, 10.f)) ;
ent->setParentNode(node) ;

First, we request a creation from the manager. We don't request any name as we don't need one ; the node will be named after a counter.
Then, this node is translated within the world, as we set its absolute position. We place it 10 units in front of the camera.
The node is then assigned to the entity, so that it drives the entity's positioning. Let's see the effect of those lines :

The sphere
See, it was there this whole time !

Now our sphere sits further away from the camera, and we can witness it in its glorious shape, from the outside. Using the node graph is optional, but when you need to move objects around, it is the way to go.
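For instance, keeping the node pointer around, the sphere can be repositioned at any time, and the entity follows :

// Moving the node moves every entity attached to it
node->setPositionAbsolute(nkMaths::Vector(2.f, 0.f, 10.f)) ;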

And this covers the basic interactions with meshes !